Longest increasing subsequence

In computer science, the longest increasing subsequence problem is to find a subsequence of a given sequence in which the subsequence elements are in sorted order, lowest to highest, and in which the subsequence is as long as possible. This subsequence is not necessarily contiguous or unique. Longest increasing subsequences are studied in the context of various disciplines related to mathematics, including algorithmics, random matrix theory, representation theory, and physics.[1] The longest increasing subsequence problem is solvable in time O(n log n), where n denotes the length of the input sequence.[2]

Example

In the binary Van der Corput sequence

0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15, …

a longest increasing subsequence is

0, 2, 6, 9, 13, 15.

This subsequence has length six; the input sequence has no seven-member increasing subsequences. The longest increasing subsequence in this example is not unique: for instance,

0, 4, 6, 9, 11, 15

is another increasing subsequence of equal length in the same input sequence.

Relations to other algorithmic problems

The longest increasing subsequence problem is closely related to the longest common subsequence problem, which has a quadratic time dynamic programming solution: the longest increasing subsequence of a sequence S is the longest common subsequence of S and T, where T is the result of sorting S. However, for the special case in which the input is a permutation of the integers 1, 2, ..., n, this approach can be made much more efficient, leading to time bounds of the form O(n log log n).[3]
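
As an illustration of this reduction, the following Python sketch computes the length of a longest increasing subsequence of a sequence of distinct values as the longest common subsequence of the sequence and its sorted copy (the function name and the textbook quadratic LCS recurrence are illustrative choices, not part of any reference implementation):

 def lis_length_via_lcs(s):
     # Quadratic-time sketch: the LIS of s equals the LCS of s and sorted(s)
     # when the values of s are distinct.
     t = sorted(s)
     n, m = len(s), len(t)
     lcs = [[0] * (m + 1) for _ in range(n + 1)]   # lcs[i][j] = LCS length of s[:i] and t[:j]
     for i in range(1, n + 1):
         for j in range(1, m + 1):
             if s[i - 1] == t[j - 1]:
                 lcs[i][j] = lcs[i - 1][j - 1] + 1
             else:
                 lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
     return lcs[n][m]

 print(lis_length_via_lcs([0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))   # 6

For inputs with repeated values this reduction yields the longest non-decreasing subsequence; either way it runs in quadratic time, which is why the O(n log n) method described below is preferred in practice.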

The largest clique in a permutation graph is defined by the longest decreasing subsequence of the permutation that defines the graph; the longest decreasing subsequence is equivalent in computational complexity, by negation of all numbers, to the longest increasing subsequence. Therefore, longest increasing subsequence algorithms can be used to solve the clique problem efficiently in permutation graphs.[4]
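
The reduction can be sketched in a few lines of Python (the quadratic helper below is only for illustration; any longest-increasing-subsequence routine, such as the O(n log n) algorithm in the next section, can be substituted):

 def lis_length_quadratic(seq):
     # best[i] = length of the longest increasing subsequence ending at position i
     best = []
     for i, x in enumerate(seq):
         best.append(1 + max((best[j] for j in range(i) if seq[j] < x), default=0))
     return max(best, default=0)

 def longest_decreasing_length(seq):
     # Negating every element turns decreasing subsequences into increasing ones.
     return lis_length_quadratic([-x for x in seq])

 # Maximum clique size in the permutation graph of (4, 1, 3, 2, 5):
 print(longest_decreasing_length([4, 1, 3, 2, 5]))   # 3, corresponding to the clique {4, 3, 2}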

Efficient algorithms

The algorithm outlined below solves the longest increasing subsequence problem efficiently, using only arrays and binary searching. It processes the sequence elements in order, maintaining the longest increasing subsequence found so far. Denote the sequence values as X[1], X[2], etc. Then, after processing X[i], the algorithm will have stored values in two arrays:

M[j] — stores the position k of the smallest value X[k] such that there is an increasing subsequence of length j ending at X[k] on the range k ≤ i (note that j ≤ k ≤ i here).
P[k] — stores the position of the predecessor of X[k] in the longest increasing subsequence ending at X[k].

In addition the algorithm stores a variable L representing the length of the longest increasing subsequence found so far.

Note that, at any point in the algorithm, the sequence

X[M[1]], X[M[2]], ..., X[M[L]]

is nondecreasing. Indeed, if there is an increasing subsequence of length j ending at X[M[j]], then there is also a subsequence of length j − 1 ending at a smaller value, namely the one ending at X[P[M[j]]]. Thus, we may do binary searches in this sequence in logarithmic time.

The algorithm, then, proceeds as follows.

 L = 0
 for i = 1, 2, ... n:
    binary search for the largest positive j ≤ L
      such that X[M[j]] < X[i] (or set j = 0 if no such value exists)
    P[i] = M[j]
    if j == L or X[i] < X[M[j+1]]:
       M[j+1] = i
       L = max(L, j+1)

When the algorithm terminates, L contains the length of the longest increasing subsequence. The subsequence itself can be recovered by backtracking through the P array: its last item is X[M[L]], its second-to-last item is X[P[M[L]]], and so on. Thus, the sequence has the form

..., X[P[P[M[L]]]], X[P[M[L]]], X[M[L]].
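
The pseudocode and the backtracking step translate directly into, for example, the following 0-indexed Python sketch (variable names mirror the pseudocode above; the explicit binary search could equally be replaced by a library routine):

 def longest_increasing_subsequence(X):
     n = len(X)
     P = [0] * n          # P[k] = index of the predecessor of X[k]
     M = [0] * (n + 1)    # M[j] = index of the smallest value ending an increasing subsequence of length j
     L = 0
     for i in range(n):
         # Binary search for the largest j <= L with X[M[j]] < X[i] (j = 0 if no such value exists).
         lo, hi = 1, L
         while lo <= hi:
             mid = (lo + hi) // 2
             if X[M[mid]] < X[i]:
                 lo = mid + 1
             else:
                 hi = mid - 1
         j = hi
         P[i] = M[j]
         if j == L or X[i] < X[M[j + 1]]:
             M[j + 1] = i          # X[i] is the new smallest value ending a subsequence of length j + 1
             L = max(L, j + 1)
     # Backtrack through P, starting from the end of the longest subsequence found.
     result = []
     k = M[L]
     for _ in range(L):
         result.append(X[k])
         k = P[k]
     return result[::-1]

 # The Van der Corput sequence from the example section:
 print(longest_increasing_subsequence(
     [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15]))   # [0, 2, 6, 9, 11, 15]

Note that P[i] is meaningful only when j > 0; the backtracking loop never follows such a link, because it stops after exactly L steps.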

Because the algorithm performs a single binary search per sequence element, its total time can be expressed using Big O notation as O(n log n). Fredman (1975) discusses a variant of this algorithm, which he credits to Donald Knuth; in the variant that he studies, the algorithm tests whether each value X[i] can be used to extend the current longest increasing sequence, in constant time, prior to doing the binary search. With this modification, the algorithm uses at most n log₂ n − n log₂ log₂ n + O(n) comparisons in the worst case, which is optimal for a comparison-based algorithm up to the constant factor in the O(n) term.[5]
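
A minimal, length-only sketch of this idea in Python (it keeps only the values X[M[1]], ..., X[M[L]] in a list, and is meant as an illustration of the constant-time extension test, not as a reconstruction of Fredman's exact procedure):

 from bisect import bisect_left

 def lis_length_with_pretest(X):
     tails = []                 # tails[j - 1] = smallest value ending an increasing subsequence of length j
     for x in X:
         if not tails or tails[-1] < x:
             tails.append(x)    # constant-time test: x extends the current longest subsequence
         else:
             tails[bisect_left(tails, x)] = x   # otherwise fall back to binary search
     return len(tails)

In the extreme case of an already sorted input, the binary search branch is never executed at all.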

Length bounds

According to the Erdős–Szekeres theorem, any sequence of n² + 1 distinct integers has either an increasing or a decreasing subsequence of length n + 1.[6][7] For inputs in which each permutation of the input is equally likely, the expected length of the longest increasing subsequence is approximately 2√n.[8] In the limit as n approaches infinity, the length of the longest increasing subsequence of a randomly permuted sequence of n items has a distribution approaching the Tracy–Widom distribution, the distribution of the largest eigenvalue of a random matrix in the Gaussian unitary ensemble.[9]
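
The 2√n approximation can be checked empirically with a short simulation (the trial count, the permutation size, and the compact length-only LIS helper below are arbitrary illustrative choices):

 import random
 from bisect import bisect_left
 from math import sqrt

 def lis_length(perm):
     tails = []
     for x in perm:
         j = bisect_left(tails, x)
         tails[j:j + 1] = [x]   # replace tails[j], or append when j == len(tails)
     return len(tails)

 n, trials = 10_000, 20
 total = 0
 for _ in range(trials):
     perm = list(range(n))
     random.shuffle(perm)
     total += lis_length(perm)
 print(total / trials, 2 * sqrt(n))   # empirical mean vs. the 2*sqrt(n) approximation

For moderate n the empirical mean is typically somewhat below 2√n, consistent with 2√n being only the leading-order term of the asymptotic expansion.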

Online algorithms

The longest increasing subsequence has also been studied in the setting of online algorithms, in which the elements of a permutation are presented one at a time to an algorithm that must decide whether to include or exclude each element, without knowledge of the ordering of later elements. In this variant of the problem, it is possible to devise a selection procedure that, when given a random permutation as input, will generate an increasing sequence with expected length approximately √(2n).[10] More precise results (including the variance) are known for the corresponding problem in the setting of a Poisson arrival process.[11]

References

  1. ^ Aldous, David; Diaconis, Persi (1999), "Longest increasing subsequences: from patience sorting to the Baik–Deift–Johansson theorem", Bulletin of the American Mathematical Society 36 (04): 413–432, doi:10.1090/S0273-0979-99-00796-X .
  2. ^ Schensted, C. (1961), "Longest increasing and decreasing subsequences", Canadian Journal of Mathematics (Canadian Mathematical Society) 13: 179–191, doi:10.4153/CJM-1961-015-3, ISSN 0008-414X, MR0121305, http://books.google.com/?id=G3sZ2zG8AiMC&pg=PA179 
  3. ^ Hunt, J.; Szymanski, T. (1977), "A fast algorithm for computing longest common subsequences", Communications of the ACM 20 (5): 350–353, doi:10.1145/359581.359603. 
  4. ^ Golumbic, M. C. (1980), Algorithmic Graph Theory and Perfect Graphs, Computer Science and Applied Mathematics, Academic Press, p. 159 .
  5. ^ Fredman, Michael L. (1975), "On computing the length of longest increasing subsequences", Discrete Mathematics 11 (1): 29–35, doi:10.1016/0012-365X(75)90103-X .
  6. ^ Erdős, Paul; Szekeres, George (1935), "A combinatorial problem in geometry", Compositio Mathematica 2: 463–470, http://www.numdam.org/item?id=CM_1935__2__463_0 .
  7. ^ Steele, J. Michael (1995), "Variations on the monotone subsequence theme of Erdős and Szekeres", in Aldous, David; Diaconis, Persi; Spencer, Joel et al., Discrete Probability and Algorithms, IMA Volumes in Mathematics and its Applications, 72, Springer-Verlag, pp. 111–131, http://www-stat.wharton.upenn.edu/~steele/Publications/PDF/VOTMSTOEAS.pdf .
  8. ^ Vershik, A. M.; Kerov, C. V. (1977), "Asymptotics of the Plancherel measure of the symmetric group and a limiting form for Young tableaux", Dokl. Akad. Nauk USSR 233: 1024–1027 .
  9. ^ Baik, Jinho; Deift, Percy; Johansson, Kurt (1999), "On the distribution of the length of the longest increasing subsequence of random permutations", Journal of the American Mathematical Society 12 (4): 1119–1178, arXiv:math/9810105, doi:10.1090/S0894-0347-99-00307-0 .
  10. ^ Samuels, Stephen. M.; Steele, J. Michael (1981), "Optimal Sequential Selection of a Monotone Sequence From a Random Sample", Ann. Probab. 9 (6): 937–947, doi:10.1214/aop/1176994265 
  11. ^ Bruss, F. Thomas; Delbaen, Freddy (2004), "A central limit theorem for the optimal selection process for monotone subsequences of maximum expected length", Stochastic Processes and their Applications 114 (2): 287–311, doi:10.1016/j.spa.2004.09.002 .
